COMPANY NEWS: iTWire met with Andrew Slavkovic, the Solutions Engineering Director for Australia and New Zealand at CyberArk.
We discussed the company, how cybercriminals are using AI to improve their cyberattacks and how organisations can defend themselves against them... including by using AI themselves!
How did CyberArk first come about?
CyberArk started 24 years ago with the explicit aim of protecting privileged identities, which even back then we identified as a critical element of any attack.
Why is CyberArk associated with identity security?
It's a natural evolution from privileged identities into today's modern world. Under certain conditions, any identity needs to be protected while still being able to perform actions of varying sensitivity, and it is those actions that are classified as “privileged” today.
How can generative AI like ChatGPT impact cybersecurity?
Generative AI models (such as ChatGPT) will change the way cybersecurity is conducted in the future. Organisations that don't leverage this technology risk losing the competitive advantage it will afford, and we are already seeing organisations create policies around its usage.
The first step, though, must be to evaluate both the risks and the benefits, and the impact those have on the individual organisation. We are seeing indicators of its potential impact right now. For example, at its current level of maturity the technology is being used to streamline social engineering campaigns, making them more realistic and believable. Compared with traditional approaches, it is very good at generating text or voice messages that impersonate individuals.
As we have also demonstrated in our CyberArk Labs research, it has the ability to create better malware, by allowing generative AI to take existing samples and mutate them into different variants. The resulting changes to the malware's signature can lead to higher levels of evasion.
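To make the signature point concrete, here is a minimal, harmless Python sketch (the payload is just a print statement, not malware) showing why a hash-based signature for a known sample no longer matches once the sample is trivially mutated:

```python
# Minimal sketch: why signature-based detection struggles with mutated variants.
# A trivial change to a (harmless) stand-in payload produces a completely
# different hash, so a signature for the original no longer matches.
import hashlib

original = b"print('hello world')"          # harmless stand-in for a known sample
mutated = b"print('hello world')  # v2"     # functionally identical, trivially altered

for label, payload in (("original", original), ("mutated", mutated)):
    print(label, hashlib.sha256(payload).hexdigest()[:16])
```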
The good news is the same technology can also be used to uplift cybersecurity capabilities in the future.
In what ways can enterprises leverage the capabilities of AI to improve cybersecurity processes?
We are already starting to see enterprises reimagine how technologies like generative AI can be used to achieve operational efficiencies, defend against cyber-attacks and gain a competitive advantage.
A perfect storm is brewing right now, with the maturing of AI models amid accelerated digital transformation and advancements in quantum computing. The combination will completely change the way cybersecurity is conducted in the future.
Enterprises will be able to leverage generative AI to better defend against cyberattacks and outsource some of the heavy lifting that security professionals were previously responsible for. Specifically, there are three areas where I see the technology helping security professionals:
1) To write policy and training documents which better align with industry best practices.
2) To analyse large data sets and find anomalous patterns, which is useful for strengthening the security control framework or conducting post-incident analysis more efficiently (a minimal sketch follows this list).
3) To simplify authentication processes by using AI to determine when to apply additional security controls based on the behaviour of the identity.
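On the second point, here is a minimal sketch of what hunting for anomalous patterns in authentication logs might look like; the field names and the simple z-score threshold are illustrative assumptions, not a CyberArk product feature:

```python
# Minimal sketch: flag identities whose login volume deviates sharply from the
# population, using a simple z-score over authentication events.
from collections import Counter
from statistics import mean, stdev

def anomalous_users(events, threshold=2.0):
    """Return users whose login counts deviate strongly from the population."""
    counts = Counter(e["user"] for e in events)
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [user for user, c in counts.items() if (c - mu) / sigma > threshold]

# Nine identities with normal activity and one service account with a spike.
logs = [{"user": f"user{i}"} for i in range(9) for _ in range(5)]
logs += [{"user": "svc-backup"}] * 50
print(anomalous_users(logs))  # ['svc-backup']
```

In practice this would feed richer features (source IP, time of day, resource accessed) into a proper model, but the workflow is the same: surface outliers for an analyst to triage rather than reviewing every event by hand.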
What ethical considerations should businesses keep in mind?
As generative AI technologies evolve, businesses must weigh the ethical questions the technology poses. Several areas require careful consideration:
Bias and Discrimination: AI models can perpetuate and amplify existing biases in the data used to train them. This can result in discriminatory outcomes, such as bias in hiring or learning practices. For example, if the training data set is essentially the whole of the internet, the model may inherit the biases of that material.
Privacy: AI models have the potential to generate content that includes sensitive information, such as personal names, addresses or financial details. This has been demonstrated recently by individuals managing to circumvent controls within the technology. Businesses must ensure that this information is adequately secured and that AI models are not used to violate privacy rights in the relevant jurisdiction.
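As a rough illustration of the privacy point, the sketch below redacts obvious personal data before a prompt ever leaves the organisation; the regular expressions are illustrative assumptions and far from exhaustive, so real deployments would lean on dedicated data-loss-prevention tooling:

```python
# Minimal sketch: strip obvious PII (emails, phone numbers) from text before
# sending it to an external generative AI service.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with a placeholder tag before prompting the model."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag} REDACTED]", text)
    return text

print(redact("Contact Jane on +61 400 123 456 or jane.doe@example.com"))
# Contact Jane on [PHONE REDACTED] or [EMAIL REDACTED]
```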
Transparency: AI models can produce decisions that are complex and difficult to understand or explain. Businesses must ensure that the models are transparent and that the outcomes generated by the models are clearly communicated to stakeholders.
What are the ways in which threat actors can use ChatGPT or AI to enhance or deploy attacks?
There has been a lot written about how AI can be used to enhance cyberattacks. We need to look at this through two lenses: the maturity of generative AI now, and where it is potentially heading in the future. Today, we see it being used in a supplementary manner rather than as a mature technology to which an entire attack can simply be outsourced. Specifically, its benefit to attackers is the ability to shorten activities and tasks in each phase of the attack life cycle, regardless of the attacker's skill set.
For example, during the reconnaissance phase it can be used to gather intelligence more efficiently for the planning of the subsequent stages of the attack. It could be used to create a report on all open-source intelligence regarding specific technical aspects of a target website, and work out the key contacts that would be ideal for social engineering attempts.
Within the attack life cycle itself it could also be used to carry out very specific tasks more efficiently and effectively, redirecting the attacker's energy to tasks that require more expertise. For example, during the ‘weaponisation and delivery’ phase, ChatGPT could be used to generate polymorphic malware based on a sample that is already well known. This would aid the attacker in creating malware that has a higher chance of evading detection and achieving its objectives. We have demonstrated at CyberArk Labs how this technology can be used to create this type of malware and how it can successfully evade traditional security controls.
How does CyberArk protect against deep fakes and voice cloning?
Deep fakes and AI voice cloning have not been around long. I think the first reported case was in 2019, when thieves mimicked the voice of an executive at the parent company of an undisclosed UK-based energy firm. The CEO recognised what he believed was his colleague's accent, speech and cadence, and quickly transferred the funds from an account as requested. Obviously, it turned out to be a fake.
There has been a lot of work done overseas, where the FBI has mandated that each image has to carry an embedded watermark to distinguish it from a deep fake. In Australia, we're only starting to catch up now with similar attacks. Although they arrive from a new source, what they do is still the same. So having consistent security controls and a layered defence-in-depth approach will still prevent or detect these attacks.
Financial gain is generally the purpose of these types of attacks, so decisions about money should be kept out of email inboxes and phone conversations. Instead, establish processes and procedures that are supplemented by security controls layered onto the application itself, using identity as the cornerstone for verifying every action.
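A minimal sketch of that kind of layered, identity-centred control is below; the data structure, channel names and threshold are assumptions for illustration, not a CyberArk API:

```python
# Minimal sketch: a funds-transfer request is only actioned inside the
# application, with verified identity and dual approval for large amounts;
# requests that arrive only by email or phone are never processed directly.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    channel: str              # e.g. "app", "email", "phone"
    identity_verified: bool   # strong (e.g. MFA) verification in the app
    second_approver: bool     # independent approval for large amounts

def allow_transfer(req: TransferRequest, large_amount: float = 10_000.0) -> bool:
    """Apply layered checks: in-app channel, verified identity, dual approval."""
    if req.channel != "app":
        return False          # email/phone requests are never actioned directly
    if not req.identity_verified:
        return False          # identity is the cornerstone of every action
    if req.amount >= large_amount and not req.second_approver:
        return False          # step-up control for high-value transfers
    return True

print(allow_transfer(TransferRequest(250_000, "phone", True, True)))  # False
print(allow_transfer(TransferRequest(250_000, "app", True, True)))    # True
```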
How can potential partners work with you?
We have an extensive partner community. We have an MSP programme as well. So, all partnership models are welcome.
Is there anything else you'd like to add?
I think it's almost the perfect storm, with the emergence and maturing of generative AI, deep fakes, digital transformation initiatives and advancements in quantum computing. This is really going to be the launch pad for a new era in cybersecurity.
I think organisations now need to start looking at the governance around how they're going to use the technology. I think banning it is the wrong approach and will leave organisations behind, so we must allow its use. We should look at how the same technology can be used to solve the problems that it has also created.
Andrew Slavkovic, thank you very much. You can find out more by visiting CyberArk’s website.